An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
Authors
Abstract
We consider an unconstrained problem of minimizing a smooth convex function which is only available through noisy observations of its values, the noise consisting of two parts. Similar to stochastic optimization problems, the first part is of a stochastic nature. The second part is an additive noise of an unknown nature, but bounded in absolute value. In the two-point feedback setting, i.e., when pairs of function values are available, we propose an accelerated derivative-free algorithm together with its complexity analysis. The complexity bound of our algorithm is only by a factor of $\sqrt{n}$ larger than the bound for accelerated gradient-based algorithms, where $n$ is the dimension of the decision variable. We also propose a nonaccelerated derivative-free algorithm with a complexity bound similar to that of the gradient-based algorithm; that is, our bound does not have any dimension-dependent factor except a logarithmic one. Notably, if the difference between the starting point and the solution is a sparse vector, both algorithms obtain a better complexity bound if the algorithm uses the 1-norm proximal setup rather than the Euclidean proximal setup, which is the standard choice for unconstrained problems.
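As a rough illustration of the two-point feedback setting (a sketch, not the paper's exact accelerated scheme), the snippet below shows the standard randomized two-point gradient estimator: query the noisy zeroth-order oracle at $x + \tau e$ and $x - \tau e$ for a random unit direction $e$ and rescale the difference. The function names, the smoothing parameter tau, and the toy objective are all hypothetical; the momentum sequence and the 1-norm proximal setup of the paper are omitted.

```python
import numpy as np

def two_point_grad_estimate(f, x, tau=1e-3, rng=None):
    """Randomized two-point gradient estimate (hypothetical helper).

    Draws a direction e uniformly from the unit sphere, queries the
    noisy zeroth-order oracle f at the pair x + tau*e and x - tau*e,
    and returns
        g = n * (f(x + tau*e) - f(x - tau*e)) / (2 * tau) * e,
    which approximates the gradient of a smoothed version of f
    (n is the dimension of x).
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    e = rng.standard_normal(n)
    e /= np.linalg.norm(e)  # uniform direction on the unit sphere
    return n * (f(x + tau * e) - f(x - tau * e)) / (2.0 * tau) * e

# Toy usage: plain (nonaccelerated) zeroth-order descent on a noisy quadratic.
rng = np.random.default_rng(0)
noisy_f = lambda x: 0.5 * x @ x + 1e-6 * rng.standard_normal()  # stand-in for stochastic + bounded noise
x = np.ones(50)
for _ in range(2000):
    g = two_point_grad_estimate(noisy_f, x, tau=1e-3, rng=rng)
    x -= 0.01 * g  # step size of order 1/(n*L) for an L-smooth objective
print(f"||x - x*|| after 2000 iterations: {np.linalg.norm(x):.4f}")
```

Roughly speaking, the dimension-dependent variance of this estimator is the intuitive source of the extra $\sqrt{n}$ factor in the accelerated complexity bound relative to gradient-based methods.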
Similar Resources
Inexact Restoration Method for Derivative-Free Optimization with Smooth Constraints
A new method is introduced for solving constrained optimization problems in which the derivatives of the constraints are available but the derivatives of the objective function are not. The method is based on the Inexact Restoration framework, by means of which each iteration is divided into two phases. In the first phase one considers only the constraints, in order to improve feasibility. In the...
Derivative free optimization method
Derivative free optimization (DFO) methods are typically designed to solve optimization problems whose objective function is computed by a “black box”; hence, the gradient computation is unavailable. Each call to the “black box” is often expensive, so estimating derivatives by finite differences may be prohibitively costly. Finally, the objective function value may be computed with some noise, ...
An adaptive accelerated first-order method for convex optimization
In this paper, we present a new accelerated variant of Nesterov’s method for solving a class of convex optimization problems, in which certain acceleration parameters are adaptively (and aggressively) chosen so as to preserve the theoretical iteration-complexity of the original method and substantially improve its practical performance in comparison to the other existing variants. Computatio...
A derivative-free comirror algorithm for convex optimization
We consider the minimization of a nonsmooth convex function over a compact convex set subject to a nonsmooth convex constraint. We work in the setting of derivative-free optimization (DFO), assuming that the objective and constraint functions are available through a black-box that provides function values for lower-C2 representation of the functions. Our approach is based on a DFO adaptation of...
Journal
Journal title: SIAM Journal on Optimization
Year: 2022
ISSN: 1095-7189, 1052-6234
DOI: https://doi.org/10.1137/19m1259225